
    CONDA-PM -- A Systematic Review and Framework for Concept Drift Analysis in Process Mining

    Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation in which a business process changes while being analysed is denoted as Concept Drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying their effects. Concept drift analysis is crucial for the early detection and management of changes, that is, deciding whether to promote a change to become part of an improved process, or to reject the change and take measures to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework, which describes the phases and requirements of a concept drift analysis approach. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches analysing concept drift. We apply the CONDA-PM framework to current approaches to concept drift analysis and evaluate their maturity. Applying the CONDA-PM framework highlights areas where further research is needed to complement existing efforts.
    Comment: 45 pages, 11 tables, 13 figures
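The article is a survey and framework rather than a single algorithm, but as a hedged illustration of the kind of control-flow drift detection the reviewed approaches perform, the sketch below slides two adjacent windows over a stream of traces and flags points where the distribution of directly-follows pairs changes significantly. The window size, the chi-square test, and the `drift_points` helper are assumptions chosen for this example, not part of CONDA-PM.

```python
# Illustrative sketch only; not CONDA-PM's method. Flag candidate control-flow
# drift points by comparing directly-follows counts in two adjacent windows of
# traces with a chi-square test (window size and test are arbitrary choices).
from collections import Counter
from itertools import pairwise          # Python 3.10+
from scipy.stats import chi2_contingency

def drift_points(traces, window=100, alpha=0.01):
    """traces: list of traces, each a list of activity labels, in temporal order.
    Returns indices where the directly-follows distribution shifts significantly."""
    def df_counts(batch):
        counts = Counter()
        for trace in batch:
            counts.update(pairwise(trace))  # count directly-follows pairs
        return counts

    points = []
    for i in range(window, len(traces) - window + 1):
        before = df_counts(traces[i - window:i])
        after = df_counts(traces[i:i + window])
        keys = sorted(set(before) | set(after))
        if len(keys) < 2:
            continue  # not enough distinct behaviour to compare
        table = [[before[k] + 1 for k in keys],   # +1 smoothing avoids
                 [after[k] + 1 for k in keys]]    # zero-count columns
        _, p_value, _, _ = chi2_contingency(table)
        if p_value < alpha:
            points.append(i)  # candidate change point, to be localised further
    return points
```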

    Evaluating Explainable Artificial Intelligence Methods Based on Feature Elimination: A Functionality-Grounded Approach

    Although predictions based on machine learning are reaching unprecedented levels of accuracy, understanding the underlying mechanisms of a machine learning model is far from trivial. Therefore, explaining machine learning outcomes is gaining increasing interest, driven by the need to understand, trust, justify, and improve both the predictions and the prediction process. This, in turn, necessitates mechanisms to evaluate explainability methods and to measure their ability to fulfill their designated tasks. In this paper, we introduce a technique to extract the most important features from a data perspective. We propose metrics to quantify the ability of an explainability method to convey and communicate the underlying concepts available in the data. Furthermore, we evaluate the ability of an eXplainable Artificial Intelligence (XAI) method to reason about the reliance of a Machine Learning (ML) model on the extracted features. Through experiments, we further show that our approach can differentiate explainability methods independently of the underlying experimental settings. The proposed metrics can be used to functionally evaluate the extent to which an explainability method is able to extract the patterns discovered by a machine learning model. Our approach provides a means to quantitatively differentiate global explainability methods in order to deepen user trust not only in the predictions generated but also in their explanations.
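As a rough sketch of evaluation by feature elimination (the paper's exact procedure and metrics are not reproduced here; the model, metric, and `elimination_curve` helper below are assumptions for illustration), one can retrain a model while progressively removing the features an XAI method ranks as most important and observe how quickly test performance degrades.

```python
# Minimal sketch, not the paper's exact procedure: score a global feature ranking
# produced by some XAI method by eliminating top-ranked features and retraining.
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score

def elimination_curve(model, X_train, y_train, X_test, y_test, ranked_features):
    """ranked_features: column names ordered from most to least important,
    as reported by the explainability method under evaluation.
    X_train / X_test are pandas DataFrames. Returns test accuracy after each
    successive removal; a steeper drop suggests the ranking better reflects
    the features the model actually relies on."""
    remaining = list(ranked_features)
    scores = []
    while remaining:
        fitted = clone(model).fit(X_train[remaining], y_train)
        scores.append(accuracy_score(y_test, fitted.predict(X_test[remaining])))
        remaining.pop(0)            # eliminate the currently top-ranked feature
    return np.array(scores)
```

Comparing such curves across XAI methods (for example by their area under the curve) gives one quantitative, model-agnostic way to differentiate global explainability methods, in the spirit of the functionality-grounded metrics the abstract describes.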